Convolutional Neural Networks (CNNs) are widely used in image forensics because they are discriminative, easy to understand, and highly learnable. However, their inherent disadvantages, namely a slowly growing receptive field, neglect of long-range dependencies, and high computational cost, lead to unsatisfactory accuracy and hinder lightweight deployment of deep learning algorithms. To solve these problems, a lightweight image copy-paste tamper detection algorithm named LKA-EfficientNet (Large Kernel Attention EfficientNet) was proposed. LKA-EfficientNet captures long-range dependencies with a global receptive field, and the number of EfficientNetV2 parameters was optimized, thereby improving both the localization speed and the detection accuracy of image tampering. Firstly, the image was fed into a backbone network based on Large Kernel Attention (LKA) to obtain candidate feature maps. Then, feature maps of different scales were used to construct a feature pyramid for feature matching. Finally, the candidate feature maps after feature matching were fused to locate the tampered area of the image. In addition, a triple cross-entropy loss function was used by LKA-EfficientNet to further improve the accuracy of tamper localization. Experimental results show that, compared to the algorithm of the same type, Dense-InceptionNet, LKA-EfficientNet reduces floating-point operations by 29.54% and increases the F1 score by 4.88%. These results verify that LKA-EfficientNet reduces computational cost while maintaining high detection performance.
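The core of LKA is to build an attention map from a stack of a small depthwise convolution, a dilated depthwise convolution (for the large effective kernel), and a pointwise channel mix, and then reweight the input with it. The following is a minimal 1D NumPy sketch of that decomposition under assumed kernel sizes and random weights; it is illustrative only and not the 2D backbone used by LKA-EfficientNet.

```python
import numpy as np

def depthwise_conv1d(x, kernel, dilation=1):
    """Per-channel 1D convolution with 'same' padding; x has shape (C, L)."""
    if dilation > 1:                              # insert zeros to realize dilation
        dilated = np.zeros((len(kernel) - 1) * dilation + 1)
        dilated[::dilation] = kernel
        kernel = dilated
    pad = len(kernel) // 2
    out = np.empty_like(x, dtype=float)
    for c in range(x.shape[0]):
        out[c] = np.convolve(np.pad(x[c], pad), kernel, mode="valid")
    return out

def large_kernel_attention(x, k_local=5, k_dilated=7, dilation=3):
    """Illustrative LKA: depthwise conv + dilated depthwise conv + pointwise
    channel mixing, then use the result as an attention map on the input."""
    rng = np.random.default_rng(0)
    c = x.shape[0]
    attn = depthwise_conv1d(x, rng.standard_normal(k_local) / k_local)
    attn = depthwise_conv1d(attn, rng.standard_normal(k_dilated) / k_dilated,
                            dilation=dilation)
    w = rng.standard_normal((c, c)) / c           # 1x1 (pointwise) mixing
    attn = w @ attn
    return attn * x                               # attention reweighting

x = np.random.default_rng(1).standard_normal((4, 32))  # 4 channels, length 32
y = large_kernel_attention(x)
print(y.shape)  # (4, 32)
```

The dilated stage is what enlarges the receptive field cheaply: a 7-tap kernel with dilation 3 spans 19 positions at the cost of 7 multiplications per output.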
Attribute network representation learning aims to learn low-dimensional dense vector representations of nodes by combining structural and attribute information while preserving node properties. Existing attribute network representation learning methods neglect the learning of attribute information and provide insufficient interaction between attribute information and the network topology, so network structure and attribute information cannot be fused efficiently. In response to these problems, a Dual auto-Encoder Network Representation Learning (DENRL) algorithm was proposed. Firstly, high-order neighborhood information of nodes was captured through a multi-hop attention mechanism. Secondly, a low-pass Laplacian filter was designed to remove high-frequency signals and iteratively obtain the attribute information of important neighbor nodes. Finally, an adaptive fusion module was constructed to increase the acquisition of important information through consistency and difference constraints on the two kinds of information, and the encoder was trained by supervising the joint reconstruction loss function of the two auto-encoders. Experimental results on the Cora, Citeseer, Pubmed and Wiki datasets show that, compared with DeepWalk, ANRL (Attributed Network Representation Learning) and other algorithms, DENRL achieves the highest clustering accuracy and the shortest running time on the three citation network datasets, reaching 0.775 and 0.4602 s respectively on Cora, and obtains the highest link prediction precision on Cora and Citeseer, reaching 0.961 and 0.970 respectively. It can be seen that fusion and interactive learning of attribute and structure information yield stronger node representation capability.
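The low-pass Laplacian filtering step can be sketched as repeatedly applying a kernel of the form (I - μL) to the attribute matrix, which smooths each node's attributes toward those of its neighbors. The filter form and μ below are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def low_pass_filter(adj, x, k=2, mu=0.5):
    """Smooth node attributes with a low-pass graph filter (I - mu*L)^k,
    where L is the symmetrically normalized Laplacian with self-loops."""
    n = len(adj)
    a = adj + np.eye(n)                             # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    lap = np.eye(n) - d_inv_sqrt @ a @ d_inv_sqrt   # normalized Laplacian
    h = np.eye(n) - mu * lap                        # low-pass filter kernel
    for _ in range(k):                              # iterative smoothing
        x = h @ x
    return x

# Toy graph: a triangle (nodes 0-2) plus a pendant node 3.
adj = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
feat = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
smoothed = low_pass_filter(adj, feat)
print(smoothed.shape)  # (4, 2)
```

High-frequency attribute variation between adjacent nodes is attenuated after each application, which is why k iterations progressively pull in information from k-hop neighborhoods.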
Current supervised face forgery video detection methods require a large amount of labeled data. To cope with the rapid iteration and wide variety of video forgery methods, the unsupervised idea from temporal anomaly detection was introduced into face forgery video detection: the detection task was transformed into an unsupervised video anomaly detection task, and an unsupervised face forgery video detection method based on reconstruction error was proposed. Firstly, the facial landmark sequence of consecutive frames in the video to be detected was extracted. Secondly, this landmark sequence was reconstructed on the basis of multi-granularity information such as deviation features, local features and temporal features. Thirdly, the reconstruction error between the original sequence and the reconstructed sequence was calculated. Finally, a score was computed from the peak frequency of the reconstruction error to detect forged videos automatically. Experimental results show that, compared with detection methods such as LRNet (Landmark Recurrent Network) and Xception-c23, the proposed method improves the AUC (Area Under Curve) of detection performance by up to 27.6% and the AUC of transfer (transplantation) performance by 30.4%.
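The final scoring step can be sketched as follows: compute a per-frame reconstruction error over the landmark sequence, then score the video by how often that error peaks above a threshold. The fixed threshold and synthetic data are assumptions for illustration; the paper's exact scoring formula is not reproduced here.

```python
import numpy as np

def anomaly_score(orig, recon, threshold=1.0):
    """Fraction of frames whose reconstruction error exceeds a threshold;
    a simple stand-in for peak-frequency scoring of the error curve."""
    err = np.linalg.norm(orig - recon, axis=1)   # per-frame landmark error
    return float((err > threshold).mean())       # peak frequency in [0, 1]

rng = np.random.default_rng(0)
seq = rng.standard_normal((100, 68 * 2))         # 100 frames of 68 (x, y) landmarks
real_recon = seq + 0.01 * rng.standard_normal(seq.shape)  # well reconstructed
fake_recon = seq + 1.00 * rng.standard_normal(seq.shape)  # poorly reconstructed
print(anomaly_score(seq, real_recon), anomaly_score(seq, fake_recon))  # 0.0 1.0
```

The intuition is that a reconstructor trained only on real facial dynamics fits genuine sequences closely, so forged sequences produce frequent error peaks.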
To deal with limited bandwidth resources, external disturbances and parameter uncertainty, a non-fragile dissipative control scheme for event-triggered networked systems was proposed. Firstly, based on the Networked Control System (NCS) model, an aperiodic-sampling event-triggered scheme was proposed, and a time-delay closed-loop system model was established. Then, a novel bilateral Lyapunov functional was constructed by exploiting the structural characteristics of the sawtooth wave. Finally, sufficient conditions guaranteeing system stability were derived by methods such as the Jensen inequality, the free weighting matrix and convex combination, and the feedback controller gain was computed. Numerical simulation results show that the proposed bilateral functional is less conservative than the unilateral functional, that the event-triggered mechanism saves bandwidth compared with the common sampling mechanism, and that the proposed controller is feasible.
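The bandwidth-saving idea of event triggering can be sketched on a scalar system: the controller only receives a new sample when the deviation between the true state and the last transmitted state exceeds a relative threshold. The plant, gains and trigger condition below are illustrative assumptions, not the paper's NCS model.

```python
def event_triggered_run(a=-1.0, k=0.5, sigma=0.3, dt=0.01, steps=1000):
    """Simulate a scalar loop x' = a*x + u with u = -k*x_hat, where x_hat is
    updated only when the trigger |x - x_hat| > sigma*|x| fires."""
    x, x_hat, events = 1.0, 1.0, 0
    for _ in range(steps):
        if abs(x - x_hat) > sigma * abs(x):   # event-trigger condition
            x_hat = x                          # transmit the new sample
            events += 1
        x += dt * (a * x - k * x_hat)          # Euler step of the closed loop
    return x, events

x_final, n_events = event_triggered_run()
print(x_final, n_events)   # state converges; far fewer than 1000 transmissions
```

Compared with periodic sampling (one transmission per step), the trigger fires only when the sampling error matters, which is the bandwidth saving the simulations quantify.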
Since the Teaching-Learning-Based Optimization (TLBO) algorithm suffers from premature convergence and poor solution accuracy on high-dimensional optimization problems, an Improved TLBO algorithm with Adaptive Competitive learning (ITLBOAC) was proposed. Firstly, a nonlinearly changing weight parameter was introduced into the "teaching" operator to determine how much the current individual maintains its own state and to adjust its attitude towards learning from the teacher: the individual learns more from the teacher in the early stage to improve its own state quickly, and retains more of its own state in the later stage to weaken the teacher's influence. Then, inspired by ecological cooperation and competition mechanisms, a "learning" operator based on adaptive competition between nearest-neighbor individuals was introduced, so that the current individual chooses its near neighbors and the population eventually shifts from cooperative evolution to competitive learning. Test results on 12 benchmark functions show that the proposed algorithm outperforms four improved TLBO algorithms in solution accuracy, stability and convergence speed, and is far better than the original TLBO, verifying its suitability for high-dimensional continuous optimization problems. On the compression spring and three-bar truss design problems, the optimal values obtained by ITLBOAC are 3.03% and 0.34% lower respectively than those obtained by TLBO. It can be seen that ITLBOAC is a trustworthy algorithm for constrained engineering optimization problems.
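A weighted teaching phase of the kind described can be sketched as below: a weight w(t) decays nonlinearly over generations, so individuals follow the teacher strongly at first and increasingly keep their own state later. The specific weight schedule here is an illustrative assumption, not the paper's formula, and the adaptive competition operator is omitted.

```python
import numpy as np

sphere = lambda x: (x ** 2).sum(axis=1)          # benchmark: sphere function

def teaching_phase(pop, fitness, t, t_max):
    """One TLBO 'teaching' step with a nonlinearly decaying weight w(t)."""
    rng = np.random.default_rng(t)
    w = 0.9 * (1 - t / t_max) ** 2 + 0.1         # nonlinear decay 1.0 -> 0.1
    teacher = pop[np.argmin(fitness)]            # best individual teaches
    tf = rng.integers(1, 3)                      # teaching factor in {1, 2}
    return pop + w * rng.random(pop.shape) * (teacher - tf * pop.mean(axis=0))

pop = np.random.default_rng(0).uniform(-5, 5, (20, 10))  # 20 learners, 10-D
init_best = sphere(pop).min()
for t in range(200):
    new = teaching_phase(pop, sphere(pop), t, 200)
    better = sphere(new) < sphere(pop)           # greedy replacement
    pop[better] = new[better]
print(init_best, sphere(pop).min())              # the best value improves
```

Large w early accelerates convergence toward the teacher's region; small w late slows the teacher's pull, which is the intended remedy for premature convergence.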
With the development of high-throughput sequencing technology, massive genome sequence data provide a basis for understanding genome structure. As an essential part of genomics research, splice site identification plays a vital role in gene discovery and the determination of gene structure, and is of great importance for understanding the expression of gene traits. To address the problem that existing models cannot sufficiently extract high-dimensional features of DNA (DeoxyriboNucleic Acid) sequences, a splice site prediction model consisting of BERT (Bidirectional Encoder Representations from Transformers) and a parallel Convolutional Neural Network (CNN), named BERT-splice, was constructed. Firstly, a DNA language model was trained by the BERT pre-training method to extract contextual dynamic association features of DNA sequences, mapping DNA sequence features to a high-dimensional matrix. Then, this DNA language model was used to map the human reference genome sequence hg19 into a high-dimensional matrix, which was adopted as the input of a parallel CNN classifier for retraining. Finally, the splice site prediction model was constructed on this basis. Experimental results show that the prediction accuracy of the BERT-splice model is 96.55% on the donor set of DNA splice sites and 95.80% on the acceptor set, improvements of 1.55% and 1.72% respectively over BERT-RCNN, a prediction model built from BERT and a Recurrent Convolutional Neural Network (RCNN). Meanwhile, the average False Positive Rate (FPR) of donor/acceptor splice site detection on five complete human gene sequences is 4.74%. These results verify the effectiveness of the BERT-splice model for gene splice site prediction.
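DNA language models typically tokenize a sequence into overlapping k-mers before BERT-style encoding. The sketch below shows that preprocessing for a toy window; the value of k, the special-token layout and the example window are assumptions, since the abstract does not specify them.

```python
def kmer_tokenize(seq, k=3):
    """Split a DNA sequence into overlapping k-mers (the usual tokenization
    for DNA language models)."""
    seq = seq.upper()
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def build_vocab(tokens, specials=("[PAD]", "[CLS]", "[SEP]", "[MASK]", "[UNK]")):
    """Map tokens to integer ids, reserving ids for BERT's special tokens."""
    vocab = {tok: i for i, tok in enumerate(specials)}
    for tok in tokens:
        vocab.setdefault(tok, len(vocab))
    return vocab

window = "ACGTAGGTAAGT"                 # toy window around a candidate donor site
tokens = kmer_tokenize(window)
vocab = build_vocab(tokens)
ids = [vocab["[CLS]"]] + [vocab[t] for t in tokens] + [vocab["[SEP]"]]
print(tokens[:3])  # ['ACG', 'CGT', 'GTA']
```

The resulting id sequence, framed by `[CLS]` and `[SEP]`, is what a BERT encoder maps into the high-dimensional matrix that the parallel CNN classifier then consumes.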
Aiming at the complexity and time consumption of Proportion Integral Differential (PID) parameter tuning for brushed Direct-Current (DC) motors, a PID parameter tuning method based on an improved Genetic Algorithm (GA) was proposed. Firstly, a fitness-enhanced elimination-through-selection rule was proposed to improve the selection process of the traditional GA. Then, a gene infection crossover method was proposed to ensure that the average fitness value increases during evolution. Finally, the unnecessary copy operation in the traditional GA was removed to speed up the algorithm. Modeling and simulation analysis were carried out on the motor transfer function. Experimental results show that, compared with conventional tuning methods, the improved GA significantly improves the PID tuning effect. Meanwhile, compared with the traditional GA, the improved GA reduces the number of generations required to reach the same evolutionary effect by 79%, and increases the running speed of the algorithm by 4.1%. The improved GA refines the two key operation steps of selection and crossover, and its application to PID parameter tuning yields a shorter rise time, a shorter settling time and a smaller overshoot.
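The overall GA-for-PID loop can be sketched as follows: evaluate each (kp, ki, kd) candidate by simulating a step response and integrating the absolute error, keep the fittest candidates, and breed children by crossover plus mutation. The first-order plant, plain uniform crossover and truncation selection below are illustrative stand-ins for the motor transfer function and the paper's fitness-enhanced selection and gene infection crossover.

```python
import numpy as np

def step_cost(kp, ki, kd, dt=0.01, steps=500):
    """Integral of absolute error for a PID loop around a toy first-order
    plant G(s) = 1/(s + 2) tracking a unit step."""
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-2.0 * y + u)                 # Euler step of the plant
        prev_err = err
        cost += abs(err) * dt
        if not -1e6 < y < 1e6:
            return 1e9                           # penalize unstable gains
    return cost

rng = np.random.default_rng(0)
pop = rng.uniform(0.0, 10.0, (30, 3))            # candidate (kp, ki, kd) triples
for _ in range(40):
    fit = np.array([step_cost(*p) for p in pop])
    parents = pop[np.argsort(fit)[:10]]          # eliminate the least fit
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(0, 10, 2)]
        mask = rng.random(3) < 0.5               # uniform crossover
        kids.append(np.clip(np.where(mask, a, b) + rng.normal(0, 0.1, 3), 0, 10))
    pop = np.vstack([parents, kids])
best = min(pop, key=lambda p: step_cost(*p))
print(best, step_cost(*best))
```

A lower integrated error corresponds directly to the shorter rise time, shorter settling time and smaller overshoot that the tuning aims for.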
To address the high energy consumption and scarce spectrum resources of the traditional Internet of Things (IoT), a Multiple Input Multiple Output-Ambient Backscatter Communication (MIMO-AmBC) system model consisting of an ambient backscatter device, a Cooperative Receiver (CRx) and an ambient Radio Frequency (RF) source was proposed. Firstly, the system model was analyzed with the Parasitic Symbiotic Radio (PSR) scheme to derive the Signal-to-Noise Ratio (SNR). Secondly, approximate expressions for the ergodic rates of the primary link and the backscatter link were derived, and the maximum of the backscatter link's ergodic rate was obtained. Finally, the proposed model was compared with the traditional cellular network and the Commensal Symbiotic Radio (CSR) scheme. The experimental results verify the correctness of the theoretical derivation and yield some meaningful conclusions: 1) the backscatter link rate grows with the logarithm of the number of receiving antennas and is independent of the number of transmitting antennas; 2) when the SNR is 10 dB, the sum rate of the PSR scheme is 36.8% and 29.9% higher than those of the traditional scheme and the CSR scheme respectively. Although the primary link rate of the PSR scheme is 5.5% lower than that of the CSR scheme, the ergodic rate of its backscatter link is 7.7 times higher than that of the CSR scheme, which provides a theoretical reference for choosing an AmBC symbiosis scheme in practical applications.
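Conclusion 1) can be illustrated with a small Monte Carlo experiment: under Rayleigh fading with maximum ratio combining, the ergodic rate E[log2(1 + SNR·‖h‖²)] grows roughly by a constant amount per doubling of receive antennas. This sketch uses a generic single-stream channel as an assumption, not the paper's exact MIMO-AmBC signal model.

```python
import numpy as np

def ergodic_rate(n_rx, snr_db=10.0, trials=2000, seed=0):
    """Monte Carlo ergodic rate of a 1-to-n_rx Rayleigh channel with
    maximum ratio combining at the receiver."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    h = (rng.standard_normal((trials, n_rx)) +
         1j * rng.standard_normal((trials, n_rx))) / np.sqrt(2)
    gain = (np.abs(h) ** 2).sum(axis=1)          # MRC combining gain
    return np.log2(1 + snr * gain).mean()

rates = [ergodic_rate(n) for n in (1, 2, 4, 8, 16)]
print([round(r, 2) for r in rates])  # increases roughly log-linearly in n_rx
```

Each doubling of n_rx roughly doubles the combining gain, adding about one bit inside the log, which is the logarithmic growth the derivation predicts.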
To improve the credibility of evaluating the damage to an aviation network caused by emergency-induced cascading failures, an aviation network cascading failure model based on overload condition and failure probability was proposed, considering the load redundancy of airport nodes: when overload occurs within a certain range, a node does not fail immediately but retains a certain overload handling ability. Firstly, the overload coefficient, weight coefficient, distribution coefficient and capacity coefficient were introduced into the traditional "load-capacity" Motter-Lai cascading failure model. Then, the redundant capacity characteristics of network nodes were described by the overload condition and failure probability, and different load redistribution strategies were applied to failed and overloaded nodes to make the model better match the reality of aviation networks. Theoretical analysis and simulation results show that increasing the overload coefficient within a certain range helps to reduce the impact of cascading failures, but the improvement becomes insignificant beyond a certain degree; within the optimal parameter intervals of the model, the aviation network maintains good robustness at a small construction cost, and optimized allocation of aviation network resources improves the network's resistance to cascading failures.
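The overload mechanism can be sketched as a Motter-Lai cascade in which a node whose load lies between its capacity and α times its capacity fails only probabilistically. The equal-split redistribution, linear failure probability and star topology below are illustrative assumptions, not the paper's exact strategies.

```python
import random

def cascade(adj, load, cap, alpha=1.2, seed=0):
    """Simplified Motter-Lai cascade with an overload interval: load in
    (cap, alpha*cap] fails with probability proportional to the overload
    fraction; load above alpha*cap always fails. A failed node's load is
    split equally among its surviving neighbors."""
    rng = random.Random(seed)
    load = dict(load)
    failed = {max(load, key=load.get)}          # initial failure: highest-load node
    new = set(failed)
    while new:
        nxt = set()
        for v in new:                            # redistribute the failed load
            alive = [u for u in adj[v] if u not in failed]
            for u in alive:
                load[u] += load[v] / len(alive)
        for u in load:
            if u in failed:
                continue
            if load[u] > alpha * cap[u]:         # hard overload: certain failure
                nxt.add(u)
            elif load[u] > cap[u]:               # soft overload: probabilistic
                p = (load[u] - cap[u]) / ((alpha - 1.0) * cap[u])
                if rng.random() < p:
                    nxt.add(u)
        failed |= nxt
        new = nxt
    return len(failed)

# Star network: one hub with 5 spokes; capacity = 1.1 * initial load.
adj = {0: [1, 2, 3, 4, 5], **{i: [0] for i in range(1, 6)}}
load0 = {0: 5.0, **{i: 1.0 for i in range(1, 6)}}
cap = {v: 1.1 * l for v, l in load0.items()}
print(cascade(adj, load0, cap))                  # 6: the hub failure spreads to all
```

Raising α widens the survivable overload interval, which is how a larger overload coefficient damps the cascade, until the interval is wide enough that further increases barely change the outcome.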
With the increasing scale and complexity of computer software, code defects have become a serious threat to public safety. Aiming at the poor extensibility of static analysis tools, as well as the coarse detection granularity and unsatisfactory detection effect of existing methods, a static code defect detection method based on program slicing and semantic feature fusion was proposed. Firstly, key points in the source code were analyzed through data flow and control flow, and a program slicing method based on the Interprocedural Finite Distributive Subset (IFDS) framework was adopted to obtain code snippets composed of multiple statements related to code defects. Then, semantically related vector representations of the code snippets were obtained by word embedding, so that an appropriate snippet length could be selected while preserving accuracy. Finally, a Text Convolutional Neural Network (TextCNN) and a Bi-directional Gated Recurrent Unit (BiGRU) were used to extract the local key features and the contextual sequence features of a code snippet respectively, and slice-level code defects were detected. Experimental results show that the proposed method detects different types of code defects effectively and is significantly better than the static analysis tool Flawfinder. At fine granularity, the IFDS slicing method further improves the F1 score and accuracy, which reach 89.64% and 92.08% respectively. Compared with existing program-slicing-based methods, when the key points are Application Programming Interfaces (APIs) or variables, the proposed method achieves F1 scores of 89.69% and 89.74% and accuracies of 92.15% and 91.98% respectively, all of which are higher. It can be seen that, without significantly increasing time complexity, the proposed method achieves better comprehensive detection performance.
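The slicing idea can be sketched on a toy dependence graph: starting from a key point, collect every statement it transitively depends on through data and control dependences. This is a deliberately simplified stand-in for IFDS-based interprocedural slicing; the line numbers and dependences are hypothetical.

```python
from collections import deque

def backward_slice(deps, criterion):
    """Collect every statement the slicing criterion transitively depends on.
    deps maps a line number to the lines it uses (data/control dependences)."""
    keep, queue = {criterion}, deque([criterion])
    while queue:
        line = queue.popleft()
        for dep in deps.get(line, ()):
            if dep not in keep:
                keep.add(dep)
                queue.append(dep)
    return sorted(keep)

# Hypothetical program dependences: line -> lines it depends on.
deps = {
    4: [2, 3],   # buffer use depends on its allocation and the length check
    3: [1],      # length check depends on the input read
    2: [],       # buffer allocation
    1: [],       # input read
    5: [],       # unrelated statement, excluded from the slice
}
print(backward_slice(deps, 4))  # [1, 2, 3, 4]
```

The resulting multi-line snippet (lines 1-4 here) is what gets embedded and fed to the TextCNN/BiGRU classifiers, while unrelated statements like line 5 are pruned away.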
Traditional scrambling-diffusion image encryption usually splits scrambling and diffusion into two independent steps, which can be cracked separately, and the encryption process is weakly nonlinear, resulting in poor security. Therefore, a strongly nonlinear image encryption algorithm with synchronous scrambling and diffusion was proposed. Firstly, a new sine-cos chaotic map was constructed to broaden the range of control parameters and improve the randomness of the sequence distribution. Then, the exclusive-OR sum of the plaintext pixels and a chaotic sequence was used as the initial chaotic value to generate a chaotic sequence, and this sequence was used to construct network structures that differ across pixels and plaintexts; at the same time, the diffusion value was used to update the network dynamically. Finally, single-pixel serial scrambling-diffusion was used to create cross-effects between scrambling and diffusion and to synchronize the two overall, so as to effectively resist separation attacks. In addition, pixel operations were transferred according to the network structure, making the serial path nonlinear and unpredictable, thereby ensuring the nonlinearity and security of the algorithm; and the sum of adjacent node pixels was used for dynamic diffusion to strengthen plaintext correlation. Experimental results show that the proposed algorithm has high encryption security and strong plaintext sensitivity, and is particularly effective against statistical attacks, differential attacks and plaintext attacks.
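The plaintext-dependent seeding step can be sketched as follows: derive the chaotic map's initial value from the plaintext pixels, iterate the map past its transient, and quantize the trajectory into a keystream. The combined sine-cos map form, burn-in length and seed derivation below are assumptions for illustration, since the abstract does not give the actual map.

```python
import math

def sine_cos_map(x, a=3.99):
    """One iteration of an illustrative sine-cos map (not the paper's map)."""
    return abs(math.sin(a * math.pi * x) * math.cos((a / 2) * math.pi * x))

def keystream(seed, n, burn_in=100):
    """Iterate the map, discard transients, and quantize to byte values."""
    x = seed
    for _ in range(burn_in):
        x = sine_cos_map(x)
    out = []
    for _ in range(n):
        x = sine_cos_map(x)
        out.append(int(x * 256) % 256)
    return out

plain = [52, 55, 61, 0, 120]                 # a few pixel values
seed = (sum(plain) ^ 0x5A) % 256 / 256.0     # plaintext-dependent initial value
ks = keystream(seed, len(plain))
cipher = [p ^ k for p, k in zip(plain, ks)]
print([c ^ k for c, k in zip(cipher, ks)] == plain)  # True: XOR roundtrip
```

Because the seed depends on the plaintext, changing a single pixel changes the entire keystream, which is the source of the plaintext sensitivity claimed above.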
Aiming at the low clinical practicability and accuracy of current myocardial infarction diagnosis, an auxiliary diagnosis method based on 12-lead ElectroCardioGram (ECG) signals was proposed. Firstly, denoising and data augmentation were performed on the 12-lead ECG signals. Secondly, for the signal of each lead, statistical features including the standard deviation, kurtosis coefficient and skewness coefficient were extracted to reflect the morphology of the ECG signal, while entropy features including Shannon entropy, sample entropy, fuzzy entropy, approximate entropy and permutation entropy were extracted to characterize the time- and frequency-domain complexity, the probability of new pattern generation, the regularity and the unpredictability of the ECG time series, as well as to detect small signal changes. Thirdly, the statistical features and entropy features were fused. Finally, based on the random forest algorithm, the performance was analyzed and verified in both intra-patient and inter-patient modes, with cross-validation used to avoid over-fitting. Experimental results show that the accuracy and F1 value of the proposed method are 99.98% and 99.99% respectively in the intra-patient mode, and 94.56% and 97.05% respectively in the inter-patient mode; moreover, compared with single-lead detection methods, myocardial infarction detection with 12-lead ECG fits doctors' clinical diagnosis practice better.
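Two of the listed entropy features can be sketched directly: a histogram-based Shannon entropy and a sample entropy computed from template matches. The bin count, embedding dimension m and tolerance r below are common defaults assumed for illustration, and the synthetic signals stand in for real ECG leads.

```python
import numpy as np

def shannon_entropy(x, bins=16):
    """Histogram-based Shannon entropy of a 1-D signal, in bits."""
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def sample_entropy(x, m=2, r=None):
    """Sample entropy: -ln(A/B), where B counts template pairs of length m
    within tolerance r (Chebyshev distance) and A those of length m+1."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=2)
        return ((d <= r).sum() - len(templ)) / 2   # exclude self-matches
    b, a = count(m), count(m + 1)
    return float(np.log(b / a)) if a > 0 and b > 0 else float("inf")

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 400)
regular = np.sin(2 * np.pi * 2 * t)               # clean periodic rhythm
noisy = regular + 0.5 * rng.standard_normal(t.size)
print(sample_entropy(regular), sample_entropy(noisy))  # noise raises entropy
```

A regular rhythm yields many repeating templates and thus low sample entropy, while disturbed morphology raises it, which is why such features help flag small pathological changes.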
Aiming at the sharp growth of data on the cloud driven by the development and popularization of cloud-native technology, as well as the performance and stability bottlenecks of existing technology, a Haystack-based storage system was proposed. With optimizations in service discovery, automatic fault tolerance and the caching mechanism, the system fits cloud-native workloads better and meets the growing, high-frequency file storage and read/write requirements of the data acquisition, storage and analysis industries. The object storage model used by the system supports massive file storage with high-frequency reads and writes. A simple, unified application interface is provided to businesses using the storage system, a file caching strategy is applied to improve resource utilization, and the rich automated tool chain of Kubernetes is adopted to make the system easier to deploy, easier to scale and more stable than other storage systems. Experimental results indicate that, for large-scale fragmented data storage with more reads than writes, the proposed system improves performance and stability compared with current mainstream object storage and file systems.
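For a read-heavy workload, a file caching strategy of the kind mentioned can be sketched as a small least-recently-used read cache in front of the backing store. This is an illustrative sketch of the general technique, not the system's actual caching implementation.

```python
from collections import OrderedDict

class LRUFileCache:
    """Tiny read cache for hot objects: evict the least recently used entry
    when capacity is exceeded."""
    def __init__(self, capacity, fetch):
        self.capacity = capacity
        self.fetch = fetch                 # backing-store read function
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)    # mark as most recently used
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        data = self.fetch(key)             # fall through to the backing store
        self.cache[key] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False) # evict least recently used
        return data

store = LRUFileCache(capacity=2, fetch=lambda k: f"blob:{k}")
for key in ["a", "b", "a", "c", "a"]:      # read-heavy access pattern
    store.read(key)
print(store.hits, store.misses)            # 2 3
```

Serving repeated reads of hot fragments from memory rather than the object store is what lifts resource utilization when reads dominate writes.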
To address over-compression by non-adaptive mapping functions and the changes in perceived contrast caused by luminance shift during mapping, a hierarchical detail-preserving tone-mapping algorithm was proposed. In this algorithm, a luminance-response curve adapted to each local luminance in High Dynamic Range (HDR) images was used as the mapping function for the luminances of the base layer. Then, compensation coefficients for stretching or compressing the detail layer were computed from the luminance-shift values on the basis of Stevens' effect. Experimental results show that the proposed algorithm performs well in preserving perceived details.
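The hierarchical structure can be sketched as follows: split log-luminance into a base layer (local average) and a detail layer, compress only the base, then recombine. The box filter and the single global compression exponent below are simplifying assumptions; the algorithm described above instead uses a locally adaptive response curve and Stevens'-effect compensation coefficients for the detail layer.

```python
import numpy as np

def tone_map(lum, radius=2, compression=0.5):
    """Base/detail tone-mapping sketch: compress the box-filtered base layer
    of log-luminance while leaving the detail layer untouched."""
    log_l = np.log10(np.maximum(lum, 1e-6))
    k = 2 * radius + 1
    padded = np.pad(log_l, radius, mode="edge")
    base = np.zeros_like(log_l)
    for dy in range(k):                       # accumulate the k x k box window
        for dx in range(k):
            base += padded[dy:dy + log_l.shape[0], dx:dx + log_l.shape[1]]
    base /= k * k                             # box-filtered base layer
    detail = log_l - base                     # detail layer (local contrast)
    out = compression * base + detail         # compress base, keep detail
    return 10 ** out

# Toy HDR frame: a dark half next to a bright half (10^4 : 1 dynamic range).
hdr = np.concatenate([np.full((8, 8), 0.01), np.full((8, 8), 100.0)], axis=1)
ldr = tone_map(hdr)
print(ldr.max() / ldr.min() < hdr.max() / hdr.min())  # True: range reduced
```

Because compression acts only on the base layer, the overall dynamic range shrinks while local contrast (the detail layer) is carried through at full strength.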